
    Determination of Genetic Relatedness among Selected Rice Cultivars Using Microsatellite Markers for Cultivars Improvement through Marker Assisted Breeding

Rice is grown in diverse environmental conditions. In this study, genetic variation among thirteen Iranian and thirteen Malaysian rice cultivars was determined using microsatellite markers. Microsatellites are polymerase chain reaction (PCR)-based deoxyribonucleic acid (DNA) markers that are abundant, co-dominant and widely used in various organisms. This study consisted of two parts. The first part was DNA extraction, in which four DNA extraction methods were compared: Dellaporta and CTAB as conventional methods, and Promega and Axyprep as commercial kits. The effects of leaf age and leaf position on the quality and yield of the extracted DNA were also compared. The results showed significant differences (P<0.05) between extraction methods in the optical density ratio (OD 260/280) and DNA yield. The Dellaporta method (OD260/280 = 2±0.07, DNA yield 2073±196 ng) gave the best results. The positions of different leaves (from top to bottom, leaf number 4 to 1) and the ages of leaves (2, 4, 6 and 8 weeks) were also monitored for optimum DNA extraction. The Duncan test showed no significant difference (P>0.05) between leaf positions for 2- to 4-week-old leaves. However, leaf age showed a significant difference (P<0.05): young, fresh tissue gave an OD260/280 ratio of 2±0.03 and a DNA yield of 1373±70 ng. These results (on extraction method, leaf age and leaf position) guided the subsequent DNA extraction of the 26 rice cultivars. The second part consisted of molecular work using twenty-one microsatellite primer pairs selected from GenBank. Genetic diversity between the two rice groups (Iranian and Malaysian cultivars) was estimated using two software packages, UVIdoc (ver. 98) and POPGENE (ver. 1.31).
A total of 21 loci (75 alleles) were observed, of which 20 (95.24%) were polymorphic; only RM338 was monomorphic. Microsatellite loci RM1 and RM271 showed the highest polymorphism (between 94 and 136 bp in size). The Polymorphism Information Content (PIC) value was 0.578±0.170. The dendrogram constructed from genetic distance values (UPGMA) grouped the cultivars into five clusters. All of the Iranian rice cultivars were placed in clusters I and III, while Malaysian rice cultivars were in clusters IV and V. However, cluster II consisted of both Iranian and Malaysian rice cultivars. The genetic diversity results for the selected cultivars can be used to screen high-grain-quality rice accessions for backcrossing and breeding programs.
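The abstract reports a PIC value of 0.578±0.170 across loci. As a minimal sketch (the function name and inputs are illustrative, not from the study), the commonly used Botstein et al. PIC formula for a single locus can be computed from its allele frequencies:

```python
from itertools import combinations

def pic(allele_freqs):
    """Polymorphism Information Content for one locus (Botstein et al.):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    homozygosity = sum(p * p for p in allele_freqs)
    double = sum(2 * (pi ** 2) * (pj ** 2)
                 for pi, pj in combinations(allele_freqs, 2))
    return 1.0 - homozygosity - double

# A biallelic locus with equal allele frequencies:
print(round(pic([0.5, 0.5]), 3))  # 0.375
```

Note that a monomorphic locus such as RM338 (a single allele at frequency 1.0) yields a PIC of 0, consistent with it being excluded from the polymorphic count.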

    Contrastive Learning of View-Invariant Representations for Facial Expressions Recognition

    Although there has been much progress in the area of facial expression recognition (FER), most existing methods suffer when presented with images captured from viewing angles that are non-frontal and substantially different from those used in training. In this paper, we propose ViewFX, a novel view-invariant FER framework based on contrastive learning, capable of accurately classifying facial expressions regardless of the input viewing angle during inference. ViewFX learns view-invariant features of expression using a proposed self-supervised contrastive loss, which brings together different views of the same subject with a particular expression in the embedding space. We also introduce a supervised contrastive loss to push the learnt view-invariant features of each expression away from other expressions. Since facial expressions are often distinguished by very subtle differences in the feature space, we incorporate the Barlow twins loss to reduce redundancy and correlation in the learned representations. The proposed method is a substantial extension of our previously proposed CL-MEx, which only had a self-supervised loss. We test the proposed framework on two public multi-view facial expression recognition datasets, KDEF and DDCF. The experiments demonstrate that our approach outperforms previous works in the area and sets a new state-of-the-art for both datasets, while showing considerably less sensitivity to challenging angles and to the number of output labels used for training. We also perform detailed sensitivity and ablation experiments to evaluate the impact of different components of our model as well as its sensitivity to different parameters. Comment: Accepted in ACM Transactions on Multimedia Computing, Communications, and Applications.
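The self-supervised contrastive loss described above pulls embeddings of different views of the same subject-and-expression together while pushing other samples away. A minimal InfoNCE-style sketch in pure Python (the function names, cosine-similarity choice, and temperature are assumptions, not the paper's exact formulation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: the positive (another view of the same subject
    with the same expression) is pulled toward the anchor in embedding
    space, while negatives are pushed away."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]  # negative log-softmax of the positive
```

The loss is small when the positive pair is already close and the negatives are far, and grows as that ordering is violated.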

    Exploring the Boundaries of Semi-Supervised Facial Expression Recognition: Learning from In-Distribution, Out-of-Distribution, and Unconstrained Data

    Deep learning-based methods have been the key driving force behind much of the recent success of facial expression recognition (FER) systems. However, the need for large amounts of labelled data remains a challenge. Semi-supervised learning offers a way to overcome this limitation, allowing models to learn from a small amount of labelled data along with a large unlabelled dataset. While semi-supervised learning has shown promise in FER, most current methods from the general computer vision literature have not been explored in the context of FER. In this work, we present a comprehensive study of 11 of the most recent semi-supervised methods in the context of FER, namely Pi-model, Pseudo-label, Mean Teacher, VAT, UDA, MixMatch, ReMixMatch, FlexMatch, FixMatch, CoMatch, and CCSSL. Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data. Our evaluation includes five FER datasets plus one large face dataset for unconstrained learning. Our results demonstrate that FixMatch consistently achieves better performance on in-distribution unlabelled data, while ReMixMatch stands out among all methods for out-of-distribution, unconstrained, and scarce unlabelled data scenarios. Another significant observation is that semi-supervised learning produces a reasonable improvement over supervised learning, regardless of whether in-distribution, out-of-distribution, or unconstrained data is utilized as the unlabelled set. We also conduct sensitivity analyses on critical hyper-parameters for the two best methods of each setting.
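FixMatch, the best performer on in-distribution unlabelled data, keeps a pseudo-label for an unlabelled sample only when the model's prediction on a weakly-augmented view is sufficiently confident; the model is then trained to predict that label on a strongly-augmented view. A hedged sketch of the selection step (the threshold value and names are illustrative):

```python
def fixmatch_pseudo_labels(weak_probs, threshold=0.95):
    """Return (sample_index, pseudo_label) pairs for the unlabelled samples
    whose weakly-augmented prediction clears the confidence threshold;
    the rest contribute no unsupervised loss this step."""
    selected = []
    for i, probs in enumerate(weak_probs):
        confidence = max(probs)
        if confidence >= threshold:
            selected.append((i, probs.index(confidence)))
    return selected

batch = [[0.97, 0.02, 0.01],   # confident -> pseudo-labelled as class 0
         [0.40, 0.35, 0.25]]   # uncertain -> discarded this step
print(fixmatch_pseudo_labels(batch))  # [(0, 0)]
```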

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from home and giving them more control over their well-being. In this study, we perform a comprehensive survey on this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions. We conclude that despite extensive research on the various components involved, the development of effective in-home health monitoring systems still requires further investigation. Comment: 35 pages, 5 figures.

    Human Pose Estimation from Ambiguous Pressure Recordings with Spatio-temporal Masked Transformers

    Despite the impressive performance of vision-based pose estimators, they generally fail to perform well under adverse vision conditions and often do not satisfy the privacy demands of customers. As a result, researchers have begun to study tactile sensing systems as an alternative. However, these systems suffer from noisy and ambiguous recordings. To tackle this problem, we propose a novel solution for pose estimation from ambiguous pressure data. Our method comprises a spatio-temporal vision transformer with an encoder-decoder architecture. Detailed experiments on two popular public datasets reveal that our model outperforms existing solutions in the area. Moreover, we observe that increasing the number of temporal crops in the early stages of the network positively impacts performance, while self-supervised pre-training with a masked auto-encoder approach further improves the results.
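The masked auto-encoder pre-training mentioned above hides a random subset of spatio-temporal patches and trains the model to reconstruct them from the visible ones. A minimal sketch of the masking step only (patch count, mask ratio, and names are assumptions, not the paper's configuration):

```python
import random

def mask_patches(num_patches, mask_ratio=0.75, seed=0):
    """Split patch indices into a visible set (fed to the encoder) and a
    masked set (reconstruction targets), as in masked auto-encoder
    pre-training. The seed makes the sketch deterministic."""
    rng = random.Random(seed)
    masked = set(rng.sample(range(num_patches), int(num_patches * mask_ratio)))
    visible = [i for i in range(num_patches) if i not in masked]
    return visible, sorted(masked)
```

With a high mask ratio, only a small fraction of each pressure-frame sequence reaches the encoder, which is what makes the reconstruction task a useful pretext for learning from noisy recordings.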

    Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations

    We propose EmoDistill, a novel speech emotion recognition (SER) framework that leverages cross-modal knowledge distillation during training to learn strong linguistic and prosodic representations of emotion from speech. During inference, our method uses only a stream of speech signals to perform unimodal SER, thus reducing computation overhead and avoiding run-time transcription and prosodic feature extraction errors. During training, our method distills information at both the embedding and logit levels from a pair of pre-trained Prosodic and Linguistic teachers that are fine-tuned for SER. Experiments on the IEMOCAP benchmark demonstrate that our method outperforms other unimodal and multimodal techniques by a considerable margin, achieving state-of-the-art performance of 77.49% unweighted accuracy and 78.91% weighted accuracy. Detailed ablation studies demonstrate the impact of each component of our method. Comment: Under review.
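Distillation at "both embedding and logit levels" can be sketched as a two-term loss: KL divergence between temperature-softened teacher and student logits, plus mean squared error between their embeddings. The weights, temperature, and function names below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def softened_softmax(logits, temperature):
    """Softmax over logits divided by a temperature (standard KD softening)."""
    m = max(logits)
    exps = [math.exp((l - m) / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits,
                      student_emb, teacher_emb,
                      temperature=2.0, alpha=0.5):
    # Logit-level term: KL divergence between softened distributions.
    p_teacher = softened_softmax(teacher_logits, temperature)
    p_student = softened_softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    # Embedding-level term: mean squared error between representations.
    mse = sum((s - t) ** 2
              for s, t in zip(student_emb, teacher_emb)) / len(student_emb)
    return alpha * kl + (1 - alpha) * mse
```

A perfectly matched student incurs zero loss; divergence at either level increases it.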

    Consistency-guided Prompt Learning for Vision-Language Models

    We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models that addresses the challenge of improving the generalization capability of large foundation models while fine-tuning them on downstream tasks in a few-shot setting. The basic idea of CoPrompt is to enforce a consistency constraint between the predictions of the trainable and pre-trained models to prevent overfitting on the downstream task. Additionally, we introduce two components into our consistency constraint to further boost performance: enforcing consistency on two perturbed inputs, and combining the two dominant tuning paradigms, prompting and adapters. Enforcing consistency on perturbed inputs further regularizes the consistency constraint, effectively improving generalization, while tuning additional parameters with prompts and adapters improves performance on downstream tasks. Extensive experiments show that CoPrompt outperforms existing methods on a range of evaluation suites, including base-to-novel generalization, domain generalization, and cross-dataset evaluation tasks. On the generalization task, CoPrompt improves the state-of-the-art by 2.09% on the zero-shot task and 1.93% on the harmonic mean over 11 recognition datasets. Detailed ablation studies show the effectiveness of each of the components in CoPrompt.
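The consistency constraint penalizes divergence between the trainable model's predictions and the frozen pre-trained model's predictions, applied to two perturbed views of the input. A minimal sketch (a squared distance stands in here for the paper's actual consistency criterion; names and weights are illustrative):

```python
def consistency_loss(trainable_preds, pretrained_preds):
    """Mean squared distance between the fine-tuned model's predictions and
    the frozen pre-trained model's predictions on the same input."""
    n = len(trainable_preds)
    return sum((a - b) ** 2
               for a, b in zip(trainable_preds, pretrained_preds)) / n

def coprompt_style_objective(task_loss, preds_a, preds_b,
                             frozen_a, frozen_b, weight=1.0):
    """Total objective: downstream task loss plus consistency terms on two
    perturbed views (a and b) of the input."""
    return task_loss + weight * (consistency_loss(preds_a, frozen_a)
                                 + consistency_loss(preds_b, frozen_b))
```

When the trainable model drifts from the pre-trained one, the consistency terms grow, which is the mechanism the abstract credits with preventing overfitting on the downstream task.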

    Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data

    We propose UnMixMatch, a semi-supervised learning framework that can learn effective representations from unconstrained unlabelled data in order to scale up performance. Most existing semi-supervised methods rely on the assumption that labelled and unlabelled samples are drawn from the same distribution, which limits the potential for improvement through the use of freely available unlabelled data. Consequently, the generalizability and scalability of semi-supervised learning are often hindered by this assumption. Our method aims to overcome these constraints and effectively utilize unconstrained unlabelled data in semi-supervised learning. UnMixMatch consists of three main components: a supervised learner with hard augmentations that provides strong regularization, a contrastive consistency regularizer to learn underlying representations from the unlabelled data, and a self-supervised loss to enhance the representations learnt from the unlabelled data. We perform extensive experiments on 4 commonly used datasets and demonstrate superior performance over existing semi-supervised methods, with a performance boost of 4.79%. Extensive ablation and sensitivity studies show the effectiveness and impact of each of the proposed components of our method.
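The three components above combine into a single training objective. A hedged sketch of that composition (the weighted-sum form and the weights are illustrative assumptions; the abstract does not give the exact weighting):

```python
def unmixmatch_objective(supervised_loss, consistency_loss, self_supervised_loss,
                         w_consistency=1.0, w_self_supervised=1.0):
    """Weighted sum of the three components described in the abstract:
    a supervised term (hard augmentations), a contrastive consistency
    term, and a self-supervised term on the unlabelled data."""
    return (supervised_loss
            + w_consistency * consistency_loss
            + w_self_supervised * self_supervised_loss)
```

Setting either unlabelled-data weight to zero recovers a purely supervised baseline, which is the comparison the ablation studies exercise.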